
Technical Rationale

Theoreticians typically strive for simplicity, looking for the smallest set of primitives on which they can build a general methodology. In computer science theory, for example, numerous languages have been devised that reflect this ideal by providing in a few primitives an expressive power at least equivalent to Turing machines. Practitioners view such efforts the way they view Turing machines: they make an interesting abstraction, but they don't tell us much about how to build systems. To a practitioner, the value of a language lies in its utility, and both excessive simplicity and excessive complexity can interfere with its utility.

This tension between theorists and practitioners is healthy, and the best solutions emerge from compromise. But a major risk in compromise is "creeping featurism." In an effort to win the broadest support, features and options are added to a language until all semblance of simplicity has been lost. Usually, however, these features have merit. Their advocates can cite numerous applications, and win their acceptance through compelling arguments about the utility of the features.

A reasonable way to balance the pressures of breadth and simplicity is to support heterogeneity. Failure to do so has doomed some otherwise laudable efforts. For example, Common Lisp did not initially define a "foreign function interface," presumably because everything could be done in Lisp. Practitioners who built Common Lisp systems realized that this was not a practical approach and built their own, mutually incompatible foreign function interfaces. As a result, Lisp systems were interchangeable only if programs used no foreign functions, and few significant programs qualified. Standardization failed.

Small, specialized languages and tools are useful. For example, the programming language embodied in a typical spreadsheet program is extremely useful. But it is certainly not general, in that there are many applications for which it is inappropriate. Such tools are therefore most useful if they can be combined with other specialized tools. Each tool is developed independently, making the development effort manageably small. But by embedding the tool in an environment with appropriate interfaces between tools, the utility of the tool is greatly magnified.

Looser integration of diverse capabilities has numerous advantages, but it also has significant disadvantages.

We believe that for system-level design problems, the advantages outweigh the disadvantages. The Ptolemy project aims to ameliorate the disadvantages. The Ptolemy software system, which has been under development at U.C. Berkeley since 1990, addresses the problem of mixing "models of computation", or semantic models. The Tycho software system, under development since 1995 as a part of the Ptolemy project, addresses the problem of mixing syntactic models. For example, while Ptolemy can support mixtures of discrete-event and dataflow modeling, Tycho can support mixtures of textual and graphical syntaxes.

In both Ptolemy and Tycho, the goals are accomplished through a strict and carefully designed object-oriented software architecture. A key principle is that of "information hiding". Objects should be able to interact with one another without knowing about each other's internal semantic or syntactic models.
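To illustrate the principle, here is a minimal sketch in Python (not the actual Ptolemy or Tycho API; the class and method names are hypothetical): components interact only through a shared abstract interface, so a composition routine never learns whether a component is internally a dataflow actor, a discrete-event model, or something else.

```python
# Hypothetical sketch of information hiding between components.
# None of these names come from the Ptolemy code base.
from abc import ABC, abstractmethod


class Block(ABC):
    """The only interface other objects may rely on."""

    @abstractmethod
    def fire(self, inputs):
        """Consume a list of input tokens; return a list of output tokens."""


class DataflowAdder(Block):
    # Internally a dataflow actor; callers need not know that.
    def fire(self, inputs):
        return [sum(inputs)]


class UnitDelay(Block):
    # Internally stateful (event-queue style); also hidden from callers.
    def __init__(self):
        self._pending = []

    def fire(self, inputs):
        out = self._pending
        self._pending = list(inputs)
        return out


def run_pipeline(blocks, tokens):
    # Composition code sees only the Block interface, never the internals.
    for block in blocks:
        tokens = block.fire(tokens)
    return tokens
```

Because `run_pipeline` depends only on `Block.fire`, a new kind of component with an entirely different internal model can be dropped in without changing any existing code, which is the point of the architectural principle described above.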


Copyright © 1996, The Regents of the University of California. All rights reserved.
Last updated: 96/04/11, comments to: eal@eecs.berkeley.edu